The proliferation of automatic faithfulness metrics for summarization has produced a need for benchmarks to evaluate them. While existing benchmarks measure the correlation with human judgements of faithfulness on model-generated summaries, they are insufficient for diagnosing whether metrics are: 1) consistent, i.e., decrease as errors are introduced into a summary, 2) effective on human-written texts, and 3) sensitive to different error types (as summaries can contain multiple errors). To address these needs, we present a benchmark of unfaithful minimal pairs (BUMP), a dataset of 889 human-written, minimally different summary pairs, where a single error (from an ontology of 7 types) is introduced to a summary from the CNN/DailyMail dataset to produce an unfaithful summary. We find BUMP complements existing benchmarks in a number of ways: 1) the summaries in BUMP are harder to discriminate and less probable under SOTA summarization models, 2) BUMP enables measuring the consistency of metrics, and reveals that the most discriminative metrics tend not to be the most consistent, 3) BUMP enables the measurement of metrics' performance on individual error types and highlights areas of weakness for future work.
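A consistent metric, in the sense above, should assign the faithful member of each minimal pair a higher score than its minimally edited unfaithful counterpart. The check can be sketched as follows; the toy overlap metric and the pair format here are illustrative placeholders, not BUMP's actual data format or any metric evaluated in the paper:

```python
def consistency_rate(pairs, metric):
    """Fraction of (doc, faithful, unfaithful) triples for which the
    metric score strictly decreases when the error is introduced."""
    hits = sum(
        1 for doc, faithful, unfaithful in pairs
        if metric(doc, faithful) > metric(doc, unfaithful)
    )
    return hits / len(pairs)

def overlap_metric(doc, summary):
    """Toy faithfulness proxy: fraction of summary tokens found in the doc."""
    doc_tokens = set(doc.lower().split())
    sum_tokens = summary.lower().split()
    return sum(t in doc_tokens for t in sum_tokens) / len(sum_tokens)

pairs = [
    ("The cat sat on the mat.", "The cat sat.", "The dog sat."),
]
rate = consistency_rate(pairs, overlap_metric)
```

A discriminative metric only needs to separate faithful from unfaithful summaries on average; consistency is the stricter pairwise requirement, which is why the two rankings of metrics can disagree.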
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical imaging analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
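K-fold cross-validation, which only a minority of the surveyed participants used, partitions the training set so that every sample appears in a validation fold exactly once. A minimal stdlib-only sketch of the index splitting (not any participant's actual code):

```python
def kfold_indices(n_samples, k=5):
    """Yield (train_idx, val_idx) index lists for k-fold cross-validation.
    Earlier folds absorb the remainder when n_samples % k != 0."""
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0)
                  for i in range(k)]
    start = 0
    for size in fold_sizes:
        val = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n_samples))
        yield train, val
        start += size

splits = list(kfold_indices(10, k=5))  # 5 disjoint validation folds of size 2
```

Ensembling the k models trained on these splits is one common way to combine "multiple identical models" as reported in the survey.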
Recently, neural networks have proven their impressive ability to solve partial differential equations (PDEs). Among them, the Fourier neural operator (FNO) has shown success in learning solution operators for highly non-linear problems such as turbulent flow. FNO is discretization-invariant: it can be trained on low-resolution data and generalize to problems at high resolution. This property is related to the low-pass filters in FNO, where only a limited number of frequency modes are selected to propagate information. However, selecting an appropriate number of frequency modes and training resolution for different PDEs remains a challenge. Too few frequency modes and low-resolution data hurt generalization, while too many frequency modes and high-resolution data are computationally expensive and lead to over-fitting. To this end, we propose the Incremental Fourier Neural Operator (IFNO), which augments both the frequency modes and the data resolution incrementally during training. We show that IFNO achieves better generalization (around a 15% reduction in testing L2 loss) while reducing the computational cost by 35%, compared to the standard FNO. In addition, we observe that IFNO follows the behavior of implicit regularization in FNO, which explains its excellent generalization ability.
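The mode selection the abstract describes can be illustrated with a plain FFT low-pass filter: keep only the lowest `n_modes` frequency coefficients and zero the rest. This is a minimal numpy sketch of the truncation idea, not the authors' implementation; an incremental schedule would simply raise `n_modes` (and the grid resolution) over the course of training:

```python
import numpy as np

def truncate_modes(signal, n_modes):
    """Keep only the lowest n_modes frequency modes (low-pass filter),
    analogous to the spectral truncation inside an FNO layer."""
    coeffs = np.fft.rfft(signal)
    filtered = np.zeros_like(coeffs)
    filtered[:n_modes] = coeffs[:n_modes]
    return np.fft.irfft(filtered, n=len(signal))

x = np.linspace(0, 2 * np.pi, 64, endpoint=False)
signal = np.sin(x) + 0.3 * np.sin(10 * x)
# An incremental schedule would grow n_modes during training, e.g. 2 -> 4 -> 16.
low = truncate_modes(signal, n_modes=2)  # keeps only DC and the sin(x) mode
```

With `n_modes=2` the high-frequency `sin(10x)` component is discarded entirely, which is exactly why too few modes hurt generalization on problems with fine-scale structure.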
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
Fabric manipulation is a long-standing challenge in robotics due to the enormous state space and complex dynamics. Learning approaches stand out as promising for this domain as they allow us to learn behaviours directly from data. Most prior methods, however, either rely heavily on simulation, which is still limited by the large sim-to-real gap for deformable objects, or rely on large datasets. A promising alternative is to learn fabric manipulation directly from watching humans perform the task. In this work, we explore how demonstrations for fabric manipulation tasks can be collected directly by human hands, providing an extremely natural and fast data collection pipeline. Then, using only a handful of such demonstrations, we show how a sample-efficient pick-and-place policy can be learned and deployed on a real robot, without any robot data collection at all. We demonstrate our approach on a fabric folding task, showing that our policy can reliably reach folded states from crumpled initial configurations.
Many ontologies, i.e., Description Logic (DL) knowledge bases, have been developed to provide rich knowledge about various domains, and many of them are based on ALC, a prototypical and expressive DL, or its extensions. The main task in exploring ALC ontologies is to compute semantic entailments. Symbolic approaches can guarantee sound and complete semantic entailment but are sensitive to inconsistency and missing information. To this end, we propose FALCON, a fuzzy ALC ontology neural reasoner. FALCON uses fuzzy logic operators to generate model structures for arbitrary ALC ontologies, and uses multiple model structures to compute semantic entailments. Theoretical results show that FALCON is guaranteed to be a sound and complete algorithm for computing semantic entailments over ALC ontologies. Experimental results show that FALCON enables not only approximate reasoning (reasoning over incomplete ontologies) and paraconsistent reasoning (reasoning over inconsistent ontologies), but also improves machine learning in the biomedical domain by incorporating background knowledge from ALC ontologies.
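Fuzzy logic operators replace Boolean truth values with degrees in [0, 1], which is what lets a model structure be built even for inconsistent or incomplete ontologies. The sketch below uses the product t-norm family, one common choice; the abstract does not specify which operators FALCON actually uses, so treat these as illustrative:

```python
# Product fuzzy logic operators (one common choice, assumed here for
# illustration; not necessarily FALCON's operators).
def f_and(a, b):
    """Conjunction via the product t-norm."""
    return a * b

def f_or(a, b):
    """Disjunction via the probabilistic-sum t-conorm."""
    return a + b - a * b

def f_not(a):
    """Standard fuzzy negation."""
    return 1.0 - a

def member_and_not(deg_a, deg_b):
    """Membership degree of an individual in the ALC concept (A AND NOT B),
    given its membership degrees in A and B."""
    return f_and(deg_a, f_not(deg_b))
```

Because every degree stays in [0, 1], a concept and its negation can both hold to a partial degree, which is what makes reasoning over inconsistent ontologies possible at all.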
We present a method for estimating the 6DoF pose of a rigid object with an available 3D model from a single RGB image. Unlike classical correspondence-based methods, which predict 3D object coordinates at pixels of the input image, the proposed method predicts 3D object coordinates at 3D query points sampled in the camera frustum. The move from pixels to 3D points, inspired by recent PIFu-style methods for 3D reconstruction, enables reasoning about the whole object, including its (self-)occluded parts. For a 3D query point associated with a pixel-aligned image feature, we train a fully-connected neural network to predict: (i) the corresponding 3D object coordinates, and (ii) the signed distance to the object surface, with the first defined only for query points in the surface vicinity. We call the mapping realized by this network a Neural Correspondence Field. The object pose is then robustly estimated from the predicted 3D-3D correspondences by the Kabsch-RANSAC algorithm. The proposed method achieves state-of-the-art results on three BOP datasets and is shown to be superior especially in challenging occluded cases. The project website is at: linhuang17.github.io/ncf.
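The Kabsch algorithm, the inner solver of the Kabsch-RANSAC step above, recovers the rigid transform aligning two 3D point sets in closed form via an SVD of their cross-covariance. A minimal numpy sketch (the example points and transform are made up for illustration):

```python
import numpy as np

def kabsch(P, Q):
    """Least-squares rotation R and translation t with Q_i ~= R @ P_i + t,
    computed via SVD of the cross-covariance of the centered point sets."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

# Example: recover a known 90-degree rotation about the z-axis.
theta = np.pi / 2
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
P = np.array([[1.0, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]])
Q = P @ R_true.T + np.array([0.5, -0.2, 0.1])
R_est, t_est = kabsch(P, Q)
```

In the RANSAC loop, this solver is run on random minimal subsets of the predicted 3D-3D correspondences, and the pose with the most inliers is kept, making the estimate robust to outlier correspondences.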
This paper shows that spherical convolutional neural networks (S-CNNs) have distinct advantages over conventional fully-connected networks (FCNs) when estimating scalar parameters of tissue microstructure from diffusion MRI (dMRI). Such microstructure parameters are valuable for identifying pathology and quantifying its extent. However, current clinical practice commonly acquires dMRI data consisting of only 6 diffusion-weighted images (DWIs), limiting the accuracy and precision of the estimated microstructure indices. Machine learning (ML) has been proposed to address this challenge. However, existing ML-based methods are not robust to differing dMRI gradient sampling schemes, nor are they rotation equivariant. The lack of robustness to sampling schemes requires a new network to be trained for each scheme, complicating the analysis of data from multiple sources. A possible consequence of the lack of rotational equivariance is that the training dataset must contain a diverse range of microstructure orientations. Here, we show that spherical CNNs represent a compelling alternative that is robust to new sampling schemes and offers rotational equivariance. We show that the latter can be leveraged to reduce the number of training data points required.
Machine learning (ML) offers powerful methods for detecting and modeling associations, often in data with large feature spaces and complex relationships. Many useful tools and packages (e.g., scikit-learn) have been developed to make the various elements of data handling, processing, modeling, and interpretation accessible. However, it is not trivial for most investigators to assemble these elements into a rigorous, replicable, unbiased, and effective data analysis pipeline. Automated machine learning (AutoML) seeks to address these issues by simplifying the ML analysis process for all. Here, we introduce STREAMLINE, a simple, transparent, end-to-end AutoML pipeline designed as a framework to easily conduct rigorous ML modeling and analysis (initially limited to binary classification). STREAMLINE is specifically designed to compare performance between datasets, ML algorithms, and other AutoML tools. It is unique in offering a fully transparent and consistent baseline of comparison through a carefully designed series of pipeline elements, including: (1) exploratory analysis, (2) basic data cleaning, (3) cross-validation partitioning, (4) data scaling and imputation, (5) filter-based feature importance estimation, (6) collective feature selection, (7) ML modeling with 'Optuna' hyperparameter optimization across 15 established algorithms (including less well-known genetic programming and rule-based ML), (8) evaluation across 16 classification metrics, (9) model feature importance estimation, (10) statistical significance comparisons, and (11) automatic export of all results, plots, a PDF summary report, and models that can be easily applied to replication data.
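Stage (4) of a pipeline like this, scaling and imputation, is small but easy to get wrong (e.g., leaking statistics across folds). A minimal numpy sketch of mean imputation followed by standardization, purely illustrative and not STREAMLINE's actual code:

```python
import numpy as np

def impute_and_scale(X):
    """Mean-impute missing values (NaN), then standardize each feature
    to zero mean and unit variance. In a real pipeline the means and
    scales must be fit on the training fold only, then reused on the
    validation fold, to avoid data leakage."""
    X = X.astype(float).copy()
    col_means = np.nanmean(X, axis=0)
    nan_rows, nan_cols = np.where(np.isnan(X))
    X[nan_rows, nan_cols] = col_means[nan_cols]
    mu = X.mean(axis=0)
    sigma = X.std(axis=0)
    sigma[sigma == 0] = 1.0  # leave constant features unscaled
    return (X - mu) / sigma

X = np.array([[1.0, 2.0],
              [np.nan, 4.0],
              [3.0, 6.0]])
Z = impute_and_scale(X)  # each column now has mean 0 and std 1
```

Placing this step after cross-validation partitioning, as in the element ordering above, is what keeps the imputation and scaling statistics from leaking test information into training.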
For soft robots to work effectively in human-centered environments, they need to be able to estimate their state and external interactions based on (proprioceptive) sensors. Estimating disturbances allows a soft robot to perform desirable force control. Even in the case of rigid manipulators, force estimation at the end effector is seen as a non-trivial problem. Indeed, other current approaches to this challenge have shortcomings that prevent their general application. They are often based on simplified soft dynamics models, such as those relying on a piecewise constant curvature (PCC) approximation or on matched rigid-body models, which do not represent enough details of the problem. As a result, the applications needed for complex human-robot interaction cannot be built. Finite element methods (FEM) allow soft robot dynamics to be predicted in a more generic fashion. Here, using the soft robot modeling capability of the framework SOFA, we build a detailed FEM model of a multi-segment soft continuum robotic arm composed of compliant deformable materials and fiber-reinforced pressure-actuated chambers, together with a model of sensors providing orientation output. This model is used to establish a state observer for the manipulator. Model parameters were calibrated to match the imperfections of the manual fabrication process using physical experiments. We then solve a quadratic programming inverse dynamics problem to compute the components of external force that explain the pose error. Our experiments show an average force estimation error of around 1.2%. As the proposed approach is generic, these results are encouraging for the task of building soft robots that exhibit complex, reactive, sensor-based behavior and can be deployed in human-centered environments.
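The core idea of the inverse dynamics step, finding the external force components that best explain an observed pose error, can be illustrated in its simplest unconstrained form as a linear least-squares problem. This is a hypothetical sketch, not the paper's method: the paper solves a constrained quadratic program on a full FEM model, and the sensitivity matrix `J` and force/error values below are invented for illustration:

```python
import numpy as np

# Hypothetical linearized model: J maps external force components f to
# pose deviations, so the force explaining an observed pose error e is
# the least-squares solution of J @ f ~= e.
J = np.array([[1.0, 0.2],
              [0.1, 1.0],
              [0.0, 0.5]])
f_true = np.array([2.0, -1.0])
e = J @ f_true                      # pose error caused by a known force
f_est, *_ = np.linalg.lstsq(J, e, rcond=None)
```

A quadratic program generalizes this by adding constraints (e.g., contact forces only pushing, bounds on magnitudes) while keeping the same quadratic objective on the residual.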